Context-aware Captions from Context-agnostic Supervision
We introduce an inference technique to produce discriminative context-aware
image captions (captions that describe differences between images or visual
concepts) using only generic context-agnostic training data (captions that
describe a concept or an image in isolation). For example, given images and
captions of "siamese cat" and "tiger cat", we generate language that describes
the "siamese cat" in a way that distinguishes it from "tiger cat". Our key
novelty is that we show how to do joint inference over a language model that is
context-agnostic and a listener which distinguishes closely-related concepts.
We first apply our technique to a justification task, namely to describe why an
image contains a particular fine-grained category as opposed to another
closely related category of the CUB-200-2011 dataset. We then study
discriminative image captioning to generate language that uniquely refers to
one of two semantically-similar images in the COCO dataset. Evaluations with
discriminative ground truth for justification and human studies for
discriminative image captioning reveal that our approach outperforms baseline
generative and speaker-listener approaches for discrimination.
Comment: Accepted to CVPR 2017 (Spotlight)
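As a concrete picture of the joint inference described above, here is a minimal sketch assuming an emitter-suppressor style decoding rule that scores each candidate token by lam * log p(w|target) - (1 - lam) * log p(w|distractor). The `speaker_logprobs` function is a hypothetical stand-in (a random toy) for a trained context-agnostic captioner, and the scoring rule is illustrative rather than the paper's exact objective.

```python
# Minimal sketch of discriminative decoding with a context-agnostic speaker.
# speaker_logprobs is a hypothetical stand-in: a real system would query a
# trained captioner for log p(next token | image, prefix).
import numpy as np

def speaker_logprobs(image, prefix, vocab_size=1000):
    """Toy context-agnostic speaker: returns normalized log-probabilities."""
    rng = np.random.default_rng(hash((image, tuple(prefix))) % 2**32)
    logits = rng.normal(size=vocab_size)
    return logits - np.log(np.exp(logits).sum())

def discriminative_decode(target, distractor, lam=0.7, max_len=12, eos=0):
    """Greedily pick tokens likely under the target image (emitter) and
    unlikely under the distractor image (suppressor)."""
    prefix = []
    for _ in range(max_len):
        emit = speaker_logprobs(target, prefix)
        suppress = speaker_logprobs(distractor, prefix)
        prefix.append(int(np.argmax(lam * emit - (1 - lam) * suppress)))
        if prefix[-1] == eos:
            break
    return prefix

print(discriminative_decode("siamese_cat.jpg", "tiger_cat.jpg"))
```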
Embarrassingly Simple Dataset Distillation
Dataset distillation extracts a small set of synthetic training samples from
a large dataset with the goal of achieving competitive performance on test data
when trained on this sample. In this work, we tackle dataset distillation at
its core by treating it directly as a bilevel optimization problem.
Re-examining the foundational back-propagation through time method, we study
the pronounced variance in the gradients, computational burden, and long-term
dependencies. We introduce an improved method: Random Truncated Backpropagation
Through Time (RaT-BPTT) to address them. RaT-BPTT incorporates a truncation
coupled with a random window, effectively stabilizing the gradients and
speeding up the optimization while covering long dependencies. This allows us
to establish a new state of the art on a variety of standard dataset benchmarks.
A deeper dive into the nature of distilled data unveils pronounced
intercorrelation. In particular, subsets of distilled datasets tend to exhibit
much worse performance than directly distilled smaller datasets of the same
size. Leveraging RaT-BPTT, we devise a boosting mechanism that generates
distilled datasets that contain subsets with near-optimal performance across
different data budgets.
Comment: Short version appears at the NeurIPS 2023 WANT workshop
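To make the truncation-with-a-random-window idea concrete, here is a minimal PyTorch sketch under the following assumptions: a linear model stands in for the inner learner, the inner optimizer is plain gradient descent, and all names and hyperparameters (`rat_bptt_grad`, `unroll`, `window`) are illustrative rather than the paper's actual API. Only a random contiguous window of the inner unroll keeps the autograd graph; the prefix steps run with the graph cut, which is what stabilizes and cheapens the bilevel gradient.

```python
import torch
import torch.nn.functional as F

def inner_loss(w, x, y):
    return F.cross_entropy(x @ w, y)

def rat_bptt_grad(x_syn, y_syn, x_real, y_real, unroll=40, window=10, lr=0.1):
    """Gradient of the outer (real-data) loss w.r.t. the synthetic data,
    backpropagated only through a random window of the inner unroll."""
    x_syn.requires_grad_(True)
    w = torch.zeros(x_syn.shape[1], int(y_syn.max()) + 1, requires_grad=True)
    start = int(torch.randint(0, unroll - window + 1, (1,)))
    # Truncated prefix: detach the synthetic data so no graph accumulates.
    for _ in range(start):
        g, = torch.autograd.grad(inner_loss(w, x_syn.detach(), y_syn), w)
        w = (w - lr * g).detach().requires_grad_(True)
    # Random window: keep the graph so gradients flow back to x_syn.
    for _ in range(window):
        g, = torch.autograd.grad(inner_loss(w, x_syn, y_syn), w, create_graph=True)
        w = w - lr * g
    outer = inner_loss(w, x_real, y_real)
    return torch.autograd.grad(outer, x_syn)[0]

# One outer update of the distilled set (toy data).
x_syn, y_syn = torch.randn(10, 8), torch.arange(10) % 2
x_real, y_real = torch.randn(64, 8), torch.randint(0, 2, (64,))
x_syn = (x_syn - 0.5 * rat_bptt_grad(x_syn, y_syn, x_real, y_real)).detach()
```

Randomizing where the tracked window sits means that, in expectation over outer steps, every segment of the unroll contributes gradient signal, without paying full-unroll memory on any single step.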
CIDEr: Consensus-based Image Description Evaluation
Automatically describing an image with a sentence is a long-standing
challenge in computer vision and natural language processing. Due to recent
progress in object detection, attribute classification, action recognition,
etc., there is renewed interest in this area. However, evaluating the quality
of descriptions has proven to be challenging. We propose a novel paradigm for
evaluating image descriptions that uses human consensus. This paradigm consists
of three main parts: a new triplet-based method of collecting human annotations
to measure consensus, a new automated metric (CIDEr) that captures consensus,
and two new datasets, PASCAL-50S and ABSTRACT-50S, that contain 50 sentences
describing each image. Our simple metric captures human judgment of consensus
better than existing metrics across sentences generated by various sources. We
also evaluate five state-of-the-art image description approaches using this new
protocol and provide a benchmark for future comparisons. A version of CIDEr
named CIDEr-D is available as part of the MS COCO evaluation server to enable
systematic evaluation and benchmarking.
Comment: To appear in CVPR 2015
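The consensus idea reduces to something quite compact: represent each sentence as a TF-IDF vector over n-grams (n = 1..4) and average the candidate's cosine similarity to every reference. The sketch below is a simplification, assuming corpus-level IDF over reference sentences, and omits the official metric's stemming and CIDEr-D's length penalty and clipping.

```python
from collections import Counter
import math

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def cider(candidate, references, corpus_refs, max_n=4):
    """Average TF-IDF n-gram cosine similarity of candidate to references.
    corpus_refs: all reference sentences in the corpus, used for IDF."""
    score = 0.0
    for n in range(1, max_n + 1):
        docs = [ngrams(r, n) for r in corpus_refs]
        idf = {g: math.log(len(docs) / sum(g in d for d in docs))
               for d in docs for g in d}
        def tfidf(counts):
            return {g: c * idf.get(g, 0.0) for g, c in counts.items()}
        c_vec = tfidf(ngrams(candidate, n))
        c_norm = math.sqrt(sum(v * v for v in c_vec.values())) or 1.0
        sims = []
        for ref in references:
            r_vec = tfidf(ngrams(ref, n))
            r_norm = math.sqrt(sum(v * v for v in r_vec.values())) or 1.0
            dot = sum(c_vec[g] * r_vec.get(g, 0.0) for g in c_vec)
            sims.append(dot / (c_norm * r_norm))
        score += sum(sims) / len(sims) / max_n
    return score

refs = [["a", "cat", "sits", "on", "a", "mat"],
        ["a", "siamese", "cat", "sitting", "on", "a", "mat"]]
print(cider(["a", "cat", "sitting", "on", "the", "mat"], refs, refs))
```

The IDF weighting is what encodes consensus: n-grams that appear in most reference sentences (e.g. "a") carry little weight, so agreement on distinctive phrases dominates the score.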
Hyperbolic Image-Text Representations
Visual and linguistic concepts naturally organize themselves in a hierarchy,
where a textual concept "dog" entails all images that contain dogs. Despite
being intuitive, current large-scale vision and language models such as CLIP do
not explicitly capture such hierarchy. We propose MERU, a contrastive model
that yields hyperbolic representations of images and text. Hyperbolic spaces
have suitable geometric properties to embed tree-like data, so MERU can better
capture the underlying hierarchy in image-text data. Our results show that MERU
learns a highly interpretable representation space while being competitive with
CLIP's performance on multi-modal tasks like image classification and
image-text retrieval.
Comment: Technical report
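A minimal sketch of the geometry involved, as the abstract suggests: encoder outputs are treated as tangent vectors at the origin of the Lorentz hyperboloid, lifted with the exponential map, and compared by negative geodesic distance, which can serve as the contrastive logit. Function names are illustrative, and MERU's actual details (learnable curvature and scales, the entailment objective) are omitted.

```python
import torch

def exp_map_origin(v, c=1.0):
    """Lift tangent vectors v (batch, d) onto the Lorentz hyperboloid,
    returning points (x_time, x_space) with <x, x>_L = -1/c."""
    sqrt_c = c ** 0.5
    norm = v.norm(dim=-1, keepdim=True).clamp_min(1e-8)
    x_space = torch.sinh(sqrt_c * norm) * v / (sqrt_c * norm)
    x_time = torch.cosh(sqrt_c * norm) / sqrt_c
    return x_time, x_space

def lorentz_distance(a, b, c=1.0):
    """Pairwise geodesic distances between two batches of hyperboloid points:
    d(x, y) = arccosh(-c <x, y>_L) / sqrt(c)."""
    a_t, a_s = a
    b_t, b_s = b
    inner = a_s @ b_s.T - a_t @ b_t.T           # Lorentzian inner product
    return torch.acosh((-c * inner).clamp_min(1.0 + 1e-7)) / c ** 0.5

img = exp_map_origin(torch.randn(4, 16))        # image embeddings
txt = exp_map_origin(torch.randn(4, 16))        # text embeddings
logits = -lorentz_distance(img, txt)            # nearer pairs score higher
```

The appeal of this geometry for hierarchy is that volume in hyperbolic space grows exponentially with radius, so general concepts can sit near the origin with their many specific instances fanned out toward the boundary.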